
    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

    Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can fail to distinguish between these diverse problem-solving behaviors. We therefore propose the semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably on the problem it was conceived for. Our work also intends to add a voice of caution to the ongoing excitement about machine intelligence, and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
    Comment: Accepted for publication in Nature Communications
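
    As a rough illustration of the kind of analysis described above, the sketch below clusters per-sample relevance heatmaps (as produced by an attribution method such as LRP) to surface groups of samples that a model handles with a common prediction strategy. It is a minimal approximation of the Spectral Relevance Analysis idea under stated assumptions, not the authors' implementation: the relevance_maps input, the downsampling size, and the fixed number of clusters are all assumptions made for the example.

        # Minimal SpRAy-style sketch (assumptions noted above): cluster relevance
        # heatmaps spectrally to reveal groups of distinct prediction strategies.
        import numpy as np
        from skimage.transform import resize            # assumes scikit-image is available
        from sklearn.cluster import SpectralClustering

        def cluster_relevance_maps(relevance_maps, size=(32, 32), n_clusters=4):
            """relevance_maps: array of shape (n_samples, H, W) -- hypothetical input."""
            # 1) Bring all heatmaps to a common low resolution and flatten them.
            feats = np.stack([resize(r, size, anti_aliasing=True).ravel()
                              for r in relevance_maps])
            # 2) Normalize so clustering compares spatial patterns, not magnitudes.
            feats /= (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
            # 3) Spectral clustering on heatmap similarity; in a full analysis the
            #    cluster count would be read off the eigengap, here it is fixed.
            labels = SpectralClustering(n_clusters=n_clusters,
                                        affinity="nearest_neighbors",
                                        random_state=0).fit_predict(feats)
            return labels

    A cluster whose heatmaps concentrate on an artifact such as a watermark or source tag rather than on the object itself would be a candidate "Clever Hans" predictor worth manual inspection.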

    The computational complexity of understanding binary classifier decisions

    For a d-ary Boolean function Φ: {0,1}^d → {0,1} and an assignment to its variables x = (x_1, x_2, ..., x_d), we consider the problem of finding those subsets of the variables that are sufficient to determine the function value with a given probability δ. This is motivated by the task of interpreting predictions of binary classifiers described as Boolean circuits, which can be seen as special cases of neural networks. We show that the problem of deciding whether such subsets of relevant variables of limited size k ≤ d exist is complete for the complexity class NP^PP and thus, in general, infeasible to solve. We then introduce a variant in which it suffices to check whether a subset determines the function value with probability at least δ or at most δ − γ for some 0 < γ < δ. This promise of a probability gap reduces the complexity to the class NP^BPP. Finally, we show that finding the minimal set of relevant variables cannot be reasonably approximated, i.e. with an approximation factor d^(1−α) for α > 0, by a polynomial-time algorithm unless P = NP. This holds even with the promise of a probability gap.
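
    To make the decision problem concrete, the brute-force sketch below checks, for a Boolean function phi, an assignment x, and a candidate subset S of variable indices, whether fixing the variables in S to their values in x preserves the function value with probability at least delta when the remaining variables are drawn uniformly at random. The example function and helper name are illustrative assumptions, not taken from the paper; exhaustive enumeration only works for tiny d, consistent with the paper's point that the general decision problem is NP^PP-complete.

        # Illustrative brute-force check (tiny d only): does fixing the variables
        # in S to their values in x determine phi(x) with probability >= delta
        # when the remaining variables are resampled uniformly at random?
        from itertools import product

        def is_delta_relevant(phi, x, S, delta):
            d = len(x)
            free = [i for i in range(d) if i not in S]      # variables left random
            agree = 0
            for bits in product((0, 1), repeat=len(free)):  # all 2^|free| completions
                y = list(x)
                for i, b in zip(free, bits):
                    y[i] = b
                agree += (phi(tuple(y)) == phi(tuple(x)))
            return agree / (2 ** len(free)) >= delta

        # Hypothetical example: a 4-variable function dominated by its first two inputs.
        phi = lambda x: int(x[0] and x[1])
        x = (1, 1, 0, 1)
        print(is_delta_relevant(phi, x, S={0, 1}, delta=0.9))   # True: the first two variables suffice
        print(is_delta_relevant(phi, x, S={2, 3}, delta=0.9))   # False: only 1/4 of completions agree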